204 research outputs found

    NasHD: Efficient ViT Architecture Performance Ranking using Hyperdimensional Computing

    Full text link
    Neural Architecture Search (NAS) is an automated architecture engineering method for deep learning design automation, serving as an alternative to the manual and error-prone process of model development, selection, evaluation, and performance estimation. However, a major obstacle for NAS is its extremely demanding computation resource requirements and time-consuming iterations, particularly as datasets scale. In this paper, targeting the emerging vision transformer (ViT), we present NasHD, a hyperdimensional computing (HDC) based supervised learning model that ranks performance given architectures and configurations. Unlike other learning-based methods, NasHD is faster thanks to the highly parallel processing of the HDC architecture. We also evaluate two HDC encoding schemes for NasHD, Gram-based and Record-based, in terms of performance and efficiency. On the VIMER-UFO benchmark dataset of 8 applications from a diverse range of domains, NasHD Record can rank the performance of nearly 100K vision transformer models in about 1 minute while still achieving results comparable with sophisticated models.
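
    The abstract does not spell out the encoder, so the following is a minimal, generic sketch of record-based hyperdimensional encoding (the scheme family that NasHD's "Record" variant refers to); the dimensionality, quantization levels, and ranking head are illustrative assumptions, not the paper's exact design.

```python
# A generic record-based HDC encoder: each architecture parameter gets a
# random ID hypervector, its quantized value gets a level hypervector,
# ID and level are bound (elementwise product) and bundled (summed).
# All sizes here are assumptions for illustration only.
import numpy as np

D = 10_000          # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def level_hvs(num_levels):
    """Correlated level hypervectors for quantized feature values."""
    base = random_hv()
    levels = [base.copy()]
    flips_per_step = D // (2 * (num_levels - 1))
    for _ in range(num_levels - 1):
        hv = levels[-1].copy()
        idx = rng.choice(D, size=flips_per_step, replace=False)
        hv[idx] *= -1
        levels.append(hv)
    return levels

def encode_record(config, id_hvs, lvl_hvs, num_levels):
    """Bind each feature's ID hypervector with its quantized level
    hypervector, then bundle (sum) across features."""
    acc = np.zeros(D)
    for i, value in enumerate(config):          # value assumed in [0, 1]
        level = min(int(value * num_levels), num_levels - 1)
        acc += id_hvs[i] * lvl_hvs[level]       # binding = elementwise product
    return np.sign(acc)                         # bundled record hypervector

# Example: encode a 6-dimensional ViT configuration vector
num_features, num_levels = 6, 16
id_hvs = [random_hv() for _ in range(num_features)]
lvl_hvs = level_hvs(num_levels)
hv = encode_record(rng.random(num_features), id_hvs, lvl_hvs, num_levels)
print(hv.shape)  # (10000,)
```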

    NRPA: Neural Recommendation with Personalized Attention

    Full text link
    Existing review-based recommendation methods usually use the same model to learn the representations of all users/items from the reviews users post about items. However, different users have different preferences and different items have different characteristics, so the same word or similar reviews may carry different informativeness for different users and items. In this paper we propose a neural recommendation approach with personalized attention to learn personalized representations of users and items from reviews. We use a review encoder to learn representations of reviews from words, and a user/item encoder to learn representations of users or items from reviews. We propose a personalized attention model and apply it to both the review and user/item encoders to select different important words and reviews for different users/items. Experiments on five datasets validate that our approach can effectively improve the performance of neural recommendation. Comment: 4 pages, 4 figures
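
    A minimal sketch of the personalized-attention idea described above: the user/item embedding produces the attention query, so different users weight the same review words differently. Layer sizes, names, and the pooling scheme are illustrative assumptions, not the exact NRPA architecture.

```python
# Personalized attention pooling: the query is derived from a user/item id
# embedding instead of being a shared learned vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedAttention(nn.Module):
    def __init__(self, hidden_dim, id_embed_dim, attn_dim):
        super().__init__()
        self.query_proj = nn.Linear(id_embed_dim, attn_dim)  # user/item id -> query
        self.key_proj = nn.Linear(hidden_dim, attn_dim)      # word/review features -> keys

    def forward(self, features, id_embedding):
        # features: (batch, seq_len, hidden_dim), e.g. encoder outputs over review words
        # id_embedding: (batch, id_embed_dim), embedding of the user or item id
        query = self.query_proj(id_embedding).unsqueeze(1)            # (batch, 1, attn_dim)
        keys = torch.tanh(self.key_proj(features))                    # (batch, seq_len, attn_dim)
        scores = torch.bmm(keys, query.transpose(1, 2)).squeeze(-1)   # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), features).squeeze(1)   # (batch, hidden_dim)

# Example: pooling 30 word vectors into one review vector, per user
attn = PersonalizedAttention(hidden_dim=100, id_embed_dim=32, attn_dim=64)
words = torch.randn(4, 30, 100)
user_ids = torch.randn(4, 32)
print(attn(words, user_ids).shape)  # torch.Size([4, 100])
```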

    Coating Condition Detection And Assessment On The Steel Girder Of A Bridge Through Hyperspectral Imaging

    Get PDF
    The organic coating of bridge steel girders is subjected to physical scratches, corrosion, and aging under natural weathering. The breakdown of the coating may cause serviceability and safety problems if left unnoticed. Conventional coating inspection is time-consuming and lacks information about the coating's chemical integrity. A hyperspectral imaging method is proposed to detect the condition of steel coatings based on coating-responsive features in reflectance spectra. A field test was conducted on a real-world bridge showing obvious signs of degradation. The hyperspectral signature enables an assessment of the coating's health and defect severity. The results indicate that coating scratches can be effectively located in the domain of a hyperspectral image and that scratch depth can be determined by mapping a scratch depth indicator (SDI = R532 nm/R641 nm). Rust sources and products in steel girders can be identified by their unique spectral signatures in the VNIR range, and the rust stains (and thus stain areas) scattered on the coating can be pinpointed at the pixel level by chloride rust (CR) indicators > 1.11 (CR = R733 nm/R841 nm). The chemical integrity of a topcoat is demonstrated by short-wave infrared spectroscopy, and topcoat degradation can be evaluated by the decreased absorption at 8000 cm−1 and 5850 cm−1. Hyperspectral imaging enables faster and more reliable coating condition detection through these spectral features and provides an alternative for multi-object coating detection.
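
    The two band-ratio indicators quoted above (SDI = R532 nm/R641 nm for scratch depth, CR = R733 nm/R841 nm with CR > 1.11 flagging chloride-rust stains) can be computed per pixel as in the short sketch below; the hyperspectral-cube layout and the wavelength-to-band lookup are assumptions for illustration, not the paper's processing pipeline.

```python
# Pixel-level band-ratio indicators from a reflectance hypercube.
import numpy as np

def band_index(wavelengths, target_nm):
    """Index of the band whose center wavelength is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def coating_indicators(cube, wavelengths, cr_threshold=1.11):
    """cube: reflectance array of shape (rows, cols, bands)."""
    r532 = cube[:, :, band_index(wavelengths, 532)]
    r641 = cube[:, :, band_index(wavelengths, 641)]
    r733 = cube[:, :, band_index(wavelengths, 733)]
    r841 = cube[:, :, band_index(wavelengths, 841)]
    sdi = r532 / np.clip(r641, 1e-6, None)   # scratch depth indicator
    cr = r733 / np.clip(r841, 1e-6, None)    # chloride rust indicator
    rust_mask = cr > cr_threshold            # pixel-level rust stain map
    return sdi, cr, rust_mask

# Example with a synthetic 100x100 pixel cube covering 400-1000 nm in 150 bands
wl = np.linspace(400, 1000, 150)
cube = np.random.rand(100, 100, 150)
sdi, cr, rust = coating_indicators(cube, wl)
print(rust.mean())  # fraction of pixels flagged as rust stains
```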

    Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes

    Full text link
    Object detection supervised by inaccurate bounding boxes has attracted broad interest, due to the expense of high-quality annotation data or the occasional inevitability of low annotation quality (e.g., tiny objects). Previous works usually utilize multiple instance learning (MIL), which highly depends on category information, to select and refine a low-quality box. Those methods suffer from object drift, group prediction, and part domination problems because they do not explore spatial information. In this paper, we propose a Spatial Self-Distillation based Object Detector (SSD-Det) to mine spatial information and refine the inaccurate box in a self-distillation fashion. SSD-Det utilizes a Spatial Position Self-Distillation (SPSD) module to exploit spatial information and an interactive structure to combine spatial and category information, thus constructing a high-quality proposal bag. To further improve the selection procedure, a Spatial Identity Self-Distillation (SISD) module is introduced in SSD-Det to obtain spatial confidence that helps select the best proposals. Experiments on the MS-COCO and VOC datasets with noisy box annotations verify our method's effectiveness and achieve state-of-the-art performance. The code is available at https://github.com/ucas-vg/PointTinyBenchmark/tree/SSD-Det. Comment: accepted by ICCV 2023
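
    The following is an illustrative sketch, not the actual SSD-Det implementation, of the proposal-bag selection step the abstract describes: proposals around a noisy box are scored by combining a category score with a spatial confidence, and the best-scoring proposal replaces the inaccurate box. Both scoring functions are placeholders standing in for the outputs of the SPSD/SISD modules.

```python
# Proposal-bag construction and selection with placeholder scores.
import numpy as np

def jitter_box(box, num_proposals=32, scale=0.2, rng=None):
    """Build a proposal bag by perturbing a noisy (x1, y1, x2, y2) box."""
    rng = rng or np.random.default_rng(0)
    box = np.asarray(box, dtype=float)
    w, h = box[2] - box[0], box[3] - box[1]
    noise = rng.normal(0.0, scale, size=(num_proposals, 4)) * np.array([w, h, w, h])
    return box + noise

def select_best_proposal(proposals, category_scores, spatial_confidences):
    """Combine category and spatial evidence and keep the top proposal."""
    combined = category_scores * spatial_confidences
    return proposals[int(np.argmax(combined))]

# Example with random placeholder scores standing in for the two module heads
rng = np.random.default_rng(1)
bag = jitter_box([50, 40, 180, 160], rng=rng)
refined = select_best_proposal(bag, rng.random(len(bag)), rng.random(len(bag)))
print(refined)
```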

    Research on characteristics of removing particles in ship exhaust gas by charged droplet

    Get PDF
    A traditional water scrubber can effectively remove particles larger than 200 μm from ship engine exhaust. However, as particle size decreases, the removal efficiency gradually declines, and when the particle size is less than 50 μm the method has almost no effect. This paper presents a study of charging the particles in exhaust gas and the water droplets to improve the water scrubber's efficiency in removing fine particles. The particles are charged mainly through corona discharge, while the water droplets are charged by applying high voltage to the nozzle. Because the feasibility and economics of these two methods have not been verified in other studies, they are numerically simulated with COMSOL Multiphysics in this paper. The simulation results show that both particles and droplets can be charged stably by the two methods. The numerical simulation results also indicate that the removal efficiency of particles in ship exhaust gas can be greatly improved by charging droplets and particles at the same time. A line chart of particle capture efficiency under different particle sizes and droplet charges is also obtained.
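
    As a back-of-the-envelope illustration of why charging both phases helps, the Coulomb attraction F = q_d q_p / (4π ε0 r²) between a charged droplet and an oppositely charged particle adds an electrostatic capture mechanism on top of inertial impaction; the charge values below are illustrative assumptions, not results from the paper's simulations.

```python
# Coulomb force between a charged droplet and a charged fine particle.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coulomb_force(q_droplet, q_particle, separation_m):
    """Magnitude of the Coulomb force (N) between two point charges."""
    return abs(q_droplet * q_particle) / (4 * math.pi * EPS0 * separation_m**2)

# Example: a droplet carrying ~1e-14 C and a fine particle carrying ~1e-17 C,
# separated by 100 um (all values illustrative)
print(coulomb_force(1e-14, 1e-17, 100e-6))  # ~9e-14 N
```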

    Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing

    Full text link
    Large language models (LLMs) have made impressive progress in natural language processing. These models rely on proper human instructions (or prompts) to generate suitable responses. However, the potential of LLMs is not fully harnessed by commonly used prompting methods: many human-in-the-loop algorithms employ ad-hoc procedures for prompt selection, while automatic prompt generation approaches essentially search all possible prompts randomly and inefficiently. We propose Evoke, an automatic prompt refinement framework. In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) and scores the current prompt; the other acts as an author (LLM-Author) and edits the prompt by considering the edit history and the reviewer's feedback. This author-reviewer feedback loop ensures that the prompt is refined in each iteration. We further integrate a data selection approach into Evoke, where only the hard samples are exposed to the LLM. The hard samples are more important because the LLM can develop a deeper understanding of the task from them, while the model may already know how to solve the easier cases. Experimental results show that Evoke significantly outperforms existing methods. For instance, on the challenging task of logical fallacy detection, Evoke scores above 80, while all other baseline methods struggle to reach 20.
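
    A schematic sketch of the author-reviewer loop described above is given below. The call_llm helper, the scoring rubric, and the hard-sample criterion are placeholders of my own; the paper's actual prompts and scoring details are not given in the abstract.

```python
# Author-reviewer prompt refinement loop with hard-sample selection (schematic).
def call_llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g. via an API client)."""
    raise NotImplementedError

def reviewer_score(prompt: str, samples) -> float:
    """LLM-Reviewer: ask the model to score how well `prompt` handles `samples`."""
    reply = call_llm(f"Score this instruction from 0-100 on the examples.\n"
                     f"Instruction: {prompt}\nExamples: {samples}\nScore:")
    return float(reply.strip())

def author_edit(prompt: str, feedback: float, history) -> str:
    """LLM-Author: rewrite the prompt given the reviewer score and edit history."""
    return call_llm(f"Previous edits: {history}\nCurrent instruction: {prompt}\n"
                    f"Reviewer score: {feedback}\nWrite an improved instruction:")

def select_hard_samples(prompt: str, dataset, k: int):
    """Keep only the k samples the current prompt handles worst (hard samples)."""
    scored = [(reviewer_score(prompt, [x]), x) for x in dataset]
    return [x for _, x in sorted(scored, key=lambda t: t[0])[:k]]

def evoke_loop(initial_prompt: str, dataset, iterations: int = 5, k: int = 8) -> str:
    prompt, history = initial_prompt, []
    for _ in range(iterations):
        hard = select_hard_samples(prompt, dataset, k)    # expose only hard samples
        score = reviewer_score(prompt, hard)              # LLM-Reviewer feedback
        new_prompt = author_edit(prompt, score, history)  # LLM-Author edit
        history.append((prompt, score))
        prompt = new_prompt
    return prompt
```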

    PB2 segment promotes high-pathogenicity of H5N1 avian influenza viruses in mice

    Get PDF
    H5N1 influenza viruses with high lethality are a continuing threat to humans and poultry. Recently, H5N1 high-pathogenicity avian influenza virus (HPAIV) has been shown to transmit through aerosols between ferrets in laboratory experiments after acquiring certain mutations, further aggravating the threat of H5N1 HPAIV to humans. To further explore the molecular determinants of H5N1 HPAIV virulence in a mammalian model, we compared the virulence of A/Duck/Guangdong/212/2004 (DK212) and A/Quail/Guangdong/90/2004 (QL90), which are genetically similar yet differ in pathogenicity in mice, as well as that of their 16 reassortants. The results indicated that swapping the PB2 gene dramatically decreased the virulence of rgDK212 in mice (1896-fold) but increased the virulence of rgQL90 in mice (60-fold). Furthermore, polymerase activity assays showed that swapping the PB2 genes between these two viruses significantly changed the activity of the polymerase complexes in 293T cells. The mutation Ser715Asn in PB2 sharply attenuated the virulence of rgDK212 in mice (2710-fold). Thus, the PB2 segment promotes the high pathogenicity of H5N1 avian influenza viruses in mice, and Ser715 in PB2 plays an important role in determining the high virulence of DK212 in mice.

    Application of Causal Inference to Genomic Analysis: Advances in Methodology

    Get PDF
    The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the genetic variants identified by GWAS can explain only a small proportion of the heritability of complex diseases; a large fraction of genetic variants remains hidden. Association analysis has limited power to unravel the mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference, which is an essential component of discovering disease mechanisms. This paper reviews the major platforms of genomic analysis used in the past and discusses the prospects of causal inference as a general framework for genomic analysis. In genomic data analysis, we usually consider four types of associations: (1) association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions); (2) association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits); (3) association of discrete variables (DNA variation) with a binary trait (disease status); and (4) association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with a binary trait (disease status). In this paper, we review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending association analysis of a discrete variable with disease to causal analysis of the discrete variable and disease.
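
    As a minimal illustration of the kind of causal inference on discrete data the review points to, the sketch below implements a generic additive-noise-model style direction test for a discrete cause-effect pair (fit the effect as a function of the cause in both directions and prefer the direction whose residuals look independent of the putative cause). This is a simplified, generic instance of that idea, not a specific method from the paper; the chi-square independence check and the synthetic SNP-like example are assumptions.

```python
# Discrete additive-noise-model style causal direction test (illustrative).
import numpy as np
from scipy.stats import chi2_contingency

def fit_residuals(x, y):
    """Regress y on x by the conditional mode, return residuals y - f(x)."""
    f = {}
    for xv in np.unique(x):
        vals, counts = np.unique(y[x == xv], return_counts=True)
        f[xv] = vals[np.argmax(counts)]
    return y - np.array([f[xv] for xv in x])

def dependence_pvalue(x, r):
    """Chi-square p-value for dependence between cause x and residual r."""
    xs, rs = np.unique(x), np.unique(r)
    table = np.zeros((len(xs), len(rs)))
    for i, xv in enumerate(xs):
        for j, rv in enumerate(rs):
            table[i, j] = np.sum((x == xv) & (r == rv))
    return chi2_contingency(table + 1e-9)[1]  # larger p = closer to independence

def anm_direction(x, y):
    """Return 'x->y' or 'y->x' according to which residual looks independent."""
    p_xy = dependence_pvalue(x, fit_residuals(x, y))
    p_yx = dependence_pvalue(y, fit_residuals(y, x))
    return "x->y" if p_xy > p_yx else "y->x"

# Example: a discrete genotype-like cause and a noisy discrete effect
rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=2000)            # e.g. SNP coded 0/1/2
y = 2 * x + rng.integers(-1, 2, size=2000)   # effect = f(x) + discrete noise
print(anm_direction(x, y))                   # expected: 'x->y'
```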